
    An efficient graph representation for arithmetic circuit verification


    Formal Verification of Infinite State Systems Using Boolean Methods


    Pyridazine-bridged cationic diiridium complexes as potential dual-mode bioimaging probes

    A novel diiridium complex [(N^C^N)2Ir(bis-N^C)Ir(N^C^N)2Cl]PF6 (N^C^N = 2-[3-tert-butyl-5-(pyridin-2-yl)phenyl]pyridine; bis-N^C = 3,6-bis(4-tert-butylphenyl)pyridazine) was designed, synthesised and characterised. The key feature of the complex is the bridging pyridazine ligand, which brings the two cyclometallated Ir(III) metal centres close enough together that Cl also acts as a bridging ligand, leading to a cationic complex. The ionic nature of the complex offers a possibility of improving solubility in water. The complex displays broad emission in the red region (λem = 520–720 nm, τ = 1.89 μs, Φem = 62% in degassed acetonitrile). Cellular assays by multiphoton (λex = 800 nm) and confocal (λex = 405 nm) microscopy show that the complex is cell-permeable, entering cells and localising to the mitochondria. Further, an appreciable yield of singlet oxygen generation (ΦΔ = 0.45, direct method, by 1O2 NIR emission in air-equilibrated acetonitrile) suggests a possible future use in photodynamic therapy. However, the complex has relatively high dark toxicity (LD50 = 4.46 μM), which will likely hinder its clinical application. Despite this toxicity, the broad emission spectrum and high emission yield of the complex suggest a possible future use of this class of compound in emission bioimaging. The presence of two heavy atoms also increases electron scattering, supporting potential future applications as a dual fluorescence and electron microscopy probe.

    A Hardware Architecture for Switch-Level Simulation


    Processing Succinct Matrices and Vectors

    We study the complexity of algorithmic problems for matrices that are represented by multi-terminal decision diagrams (MTDD). These are a variant of ordered decision diagrams, where the terminal nodes are labeled with arbitrary elements of a semiring (instead of 0 and 1). A simple example shows that the product of two MTDD-represented matrices cannot be represented by an MTDD of polynomial size. To overcome this deficiency, we extend MTDDs to MTDD_+ by allowing componentwise symbolic addition of variables (of the same dimension) in rules. It is shown that accessing an entry, equality checking, matrix multiplication, and other basic matrix operations can be solved in polynomial time for MTDD_+-represented matrices. On the other hand, testing whether the determinant of an MTDD-represented matrix vanishes is PSPACE-complete, and the same problem is NP-complete for MTDD_+-represented diagonal matrices. Computing a specific entry in a product of MTDD-represented matrices is #P-complete. Comment: An extended abstract of this paper will appear in the Proceedings of CSR 201
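    As an illustration of the data structure (a minimal Python sketch, not the paper's implementation; all names are hypothetical), the following fragment represents a 2^h x 2^h matrix as a quadrant-splitting decision diagram with hash-consed sharing of identical sub-matrices, which is what makes succinct storage and O(h)-time entry access possible.

    ```python
    # Minimal sketch of an MTDD-like representation for a 2^h x 2^h matrix:
    # internal nodes split the matrix into four quadrants, terminals hold
    # semiring values, and hash-consing shares identical sub-matrices.
    class MTDD:
        _table = {}                      # hash-consing table: equal nodes are shared

        def __init__(self, key):
            self.key = key               # ('leaf', value) or ('node', q00, q01, q10, q11)

        @classmethod
        def make(cls, key):
            node = cls._table.get(key)
            if node is None:
                node = cls._table[key] = cls(key)
            return node

        @classmethod
        def leaf(cls, value):
            return cls.make(('leaf', value))

        @classmethod
        def node(cls, q00, q01, q10, q11):
            if q00 is q01 is q10 is q11 and q00.key[0] == 'leaf':
                return q00               # a constant block collapses to a leaf
            return cls.make(('node', q00, q01, q10, q11))

    def entry(m, h, i, j):
        """Read entry (i, j) of a 2^h x 2^h MTDD-represented matrix in O(h) steps."""
        while m.key[0] == 'node':
            h -= 1
            half = 1 << h
            m = m.key[1 + 2 * (i >= half) + (j >= half)]
            i, j = i % half, j % half
        return m.key[1]

    # A 4x4 block-diagonal matrix in which the 2x2 block is stored only once.
    zero, one = MTDD.leaf(0), MTDD.leaf(1)
    block = MTDD.node(one, zero, zero, one)
    mat = MTDD.node(block, zero, zero, block)
    assert entry(mat, 2, 3, 3) == 1 and entry(mat, 2, 3, 0) == 0
    ```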

    Efficient Set Sharing Using ZBDDs

    Set sharing is an abstract domain in which each concrete object is represented by the set of local variables from which it might be reachable. It is a useful abstraction to detect parallelism opportunities, since it contains definite information about which variables do not share in memory, i.e., about when the memory regions reachable from those variables are disjoint. Set sharing is a more precise alternative to pair sharing, in which each domain element is a set of all pairs of local variables from which a common object may be reachable. However, the exponential complexity of some set sharing operations has limited its wider application. This work introduces an efficient implementation of the set sharing domain using Zero-suppressed Binary Decision Diagrams (ZBDDs). Because ZBDDs were designed to represent sets of combinations (i.e., sets of sets), they naturally represent elements of the set sharing domain. We show how to synthesize the operations needed in the set sharing transfer functions from basic ZBDD operations. For some of the operations, we devise custom ZBDD algorithms that perform better in practice. We also compare our implementation of the abstract domain with an efficient, compact, bit set-based alternative, and show that the ZBDD version scales better in terms of both memory usage and running time.
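    To illustrate why ZBDDs fit the set sharing domain, here is a minimal, hypothetical Python sketch (not the paper's implementation) of a zero-suppressed decision diagram storing a family of sharing sets, together with family union as a stand-in for the domain's join.

    ```python
    # Minimal sketch of a ZBDD storing a set-sharing element, i.e. a family of
    # sets of (numbered) program variables. node(v, lo, hi) denotes the family
    # lo ∪ { S ∪ {v} : S ∈ hi }, with the zero-suppression rule hi = EMPTY -> lo.
    class ZBDD:
        _table = {}

        def __init__(self, var, lo, hi):
            self.var, self.lo, self.hi = var, lo, hi

        @classmethod
        def node(cls, var, lo, hi):
            if hi is EMPTY:                   # zero-suppression rule
                return lo
            key = (var, id(lo), id(hi))       # hash-consing on canonical children
            n = cls._table.get(key)
            if n is None:
                n = cls._table[key] = cls(var, lo, hi)
            return n

    EMPTY = ZBDD(None, None, None)            # the empty family
    BASE = ZBDD(None, None, None)             # the family containing only {}

    def singleton(variables):
        """ZBDD for the family that contains exactly one sharing set."""
        f = BASE
        for v in sorted(set(variables), reverse=True):
            f = ZBDD.node(v, EMPTY, f)
        return f

    def union(a, b):
        """Family union, used here as the join of the set-sharing domain."""
        if a is EMPTY or a is b: return b
        if b is EMPTY: return a
        if a is BASE: a, b = b, a
        if b is BASE: return ZBDD.node(a.var, union(a.lo, BASE), a.hi)
        if a.var == b.var: return ZBDD.node(a.var, union(a.lo, b.lo), union(a.hi, b.hi))
        if a.var > b.var: a, b = b, a
        return ZBDD.node(a.var, union(a.lo, b), a.hi)

    def sets(f, prefix=()):
        """Enumerate the sharing sets in the family (small examples only)."""
        if f is EMPTY: return []
        if f is BASE: return [set(prefix)]
        return sets(f.lo, prefix) + sets(f.hi, prefix + (f.var,))

    # Sharing groups {x0}, {x0, x1} and {x2}, built and joined as ZBDDs.
    sharing = union(union(singleton([0]), singleton([0, 1])), singleton([2]))
    assert sorted(map(sorted, sets(sharing))) == [[0], [0, 1], [2]]
    ```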

    Parallel Recursive State Compression for Free

    This paper focuses on reducing memory usage in enumerative model checking, while maintaining the multi-core scalability obtained in earlier work. We present a tree-based multi-core compression method, which works by leveraging sharing among sub-vectors of state vectors. An algorithmic analysis of both worst-case and optimal compression ratios shows the potential to compress even large states to a small constant on average (8 bytes). Our experiments demonstrate that this holds up in practice: the median compression ratio of 279 measured experiments is within 17% of the optimum for tree compression, and five times better than the median compression ratio of SPIN's COLLAPSE compression. Our algorithms are implemented in the LTSmin tool, and our experiments show that for model checking, multi-core tree compression pays its own way: it comes virtually without overhead compared to the fastest hash table-based methods. Comment: 19 pages
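    The following minimal Python sketch (hypothetical, not the LTSmin implementation) illustrates the idea of tree compression: a fixed-length state vector is folded pairwise through per-level hash tables, so identical sub-vectors are stored only once and each full state reduces to a single root index.

    ```python
    # Sketch of recursive tree compression for power-of-two-length state vectors.
    class TreeTable:
        def __init__(self, length):
            assert length and (length & (length - 1)) == 0, "use a power-of-two length"
            self.levels = []                  # one (left, right) -> index table per level
            while length > 1:
                self.levels.append({})
                length //= 2

        def insert(self, state):
            """Fold a state vector into a single root index."""
            layer = list(state)
            for table in self.levels:
                layer = [table.setdefault(pair, len(table))
                         for pair in zip(layer[0::2], layer[1::2])]
            return layer[0]

        def reconstruct(self, root):
            """Unfold a root index back into the original state vector."""
            layer = [root]
            for table in reversed(self.levels):
                inverse = {idx: pair for pair, idx in table.items()}
                layer = [part for idx in layer for part in inverse[idx]]
            return layer

    t = TreeTable(8)
    a = t.insert([1, 2, 3, 4, 5, 6, 7, 8])
    b = t.insert([1, 2, 3, 4, 5, 6, 7, 9])   # shares all but one leaf pair with the first state
    assert t.reconstruct(a) == [1, 2, 3, 4, 5, 6, 7, 8] and a != b
    ```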

    Special section on advances in reachability analysis and decision procedures: contributions to abstraction-based system verification

    Reachability analysis asks whether a system can evolve from legitimate initial states to unsafe states. It is thus a fundamental tool in the validation of computational systems - be they software, hardware, or a combination thereof. We recall a standard approach for reachability analysis, which captures the system in a transition system, forms another transition system as an over-approximation, and performs an incremental fixed-point computation on that over-approximation to determine whether unsafe states can be reached. We show this method to be sound for proving the absence of errors, and discuss its limitations for proving the presence of errors, as well as some means of addressing them. We then sketch how program annotations for data integrity constraints and interface specifications - as in Bertrand Meyer's paradigm of Design by Contract - can facilitate the validation of modular programs, e.g., by obtaining more precise verification conditions for software verification supported by automated theorem proving. Then we recap how the decision problem of satisfiability for formulae of logics with theories - e.g., bit-vector arithmetic - can be used to construct an over-approximating transition system for a program. Programs with data types comprised of bit-vectors of finite width require bespoke decision procedures for satisfiability. Finite-width data types challenge the reduction of that decision problem to one that off-the-shelf tools can solve effectively, e.g., SAT solvers for propositional logic. In that context, we recall the Tseitin encoding, which converts formulae from that logic into conjunctive normal form - the standard format for most SAT solvers - with only a linear blow-up in the size of the formula, at the cost of a linear increase in the number of variables. Finally, we discuss the contributions that the three papers in this special section make in the areas that we sketched above. © Springer-Verlag 2009
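    A minimal Python sketch of the incremental fixed-point computation described above, with hypothetical names: starting from the initial states of a finite (over-approximating) transition system, successor states are added until nothing new appears, and unsafe states are checked along the way.

    ```python
    # Enumerative reachability on an explicit, finite over-approximation.
    def reachable_unsafe(initial, successors, unsafe):
        """Return True if some unsafe state is reachable, False if the fixed
        point is reached without hitting one."""
        reached = set(initial)
        frontier = list(initial)
        while frontier:
            state = frontier.pop()
            if state in unsafe:
                return True
            for nxt in successors(state):
                if nxt not in reached:        # incremental: only explore new states
                    reached.add(nxt)
                    frontier.append(nxt)
        return False

    # Toy over-approximation: a counter that may increment (saturating at 5) or reset.
    succ = lambda s: {min(s + 1, 5), 0}
    assert not reachable_unsafe({0}, succ, unsafe={7})   # 7 is never reached
    assert reachable_unsafe({0}, succ, unsafe={5})       # 5 is reachable
    ```

    On an over-approximation, a negative answer establishes the absence of errors, whereas a positive answer may be spurious, which is the limitation discussed above.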

    Data-structures for the verification of timed automata


    Sawja: Static Analysis Workshop for Java

    Static analysis is a powerful technique for automatic verification of programs but raises major engineering challenges when developing a full-fledged analyzer for a realistic language such as Java. This paper describes the Sawja library: a static analysis framework fully compliant with Java 6 which provides OCaml modules for efficiently manipulating Java bytecode programs. We present the main features of the library, including (i) efficient functional data-structures for representing programs with implicit sharing and lazy parsing, (ii) an intermediate stack-less representation, and (iii) fast computation and manipulation of complete programs.
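    As a rough illustration of what a stack-less intermediate representation buys (a hypothetical Python sketch, not Sawja's algorithm or OCaml API), the following fragment symbolically executes a toy stack-based bytecode and rewrites it into three-address-style instructions with named temporaries, which are easier for analyses to consume than an operand stack.

    ```python
    # Toy conversion from stack-based bytecode to a stack-less, three-address form.
    def to_stackless(bytecode):
        """bytecode: list of ('push', c), ('load', x), ('add',), ('store', x)."""
        stack, out, fresh = [], [], iter(range(10**6))
        for instr in bytecode:
            if instr[0] == 'push':
                stack.append(str(instr[1]))           # constant operand
            elif instr[0] == 'load':
                stack.append(instr[1])                # local variable operand
            elif instr[0] == 'add':
                b, a = stack.pop(), stack.pop()
                t = f"t{next(fresh)}"                 # fresh temporary replaces a stack slot
                out.append(f"{t} := {a} + {b}")
                stack.append(t)
            elif instr[0] == 'store':
                out.append(f"{instr[1]} := {stack.pop()}")
        return out

    # x = x + (1 + 2), written as stack code:
    code = [('load', 'x'), ('push', 1), ('push', 2), ('add',), ('add',), ('store', 'x')]
    assert to_stackless(code) == ['t0 := 1 + 2', 't1 := x + t0', 'x := t1']
    ```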